ESS 575 Models for Ecological Data

JAGS Primer Answers

March 07, 2017


Motivation for using knitr

You can complete all the coding exercises in the JAGS Primer using simple R scripts. However, you might be wondering how we created the JAGS Primer key you are looking at right now. We did this using Yihui Xie’s knitr package in R. This can be a highly useful tool for organizing your Bayesian analyses. Within the same document, you can:

  • Describe the model and specify the joint distribution using \(\LaTeX\) and RMarkdown.
  • Write the R and JAGS code for implementing your model in JAGS.
  • Run the model directly in JAGS.
  • Summarize the convergence diagnostics and present results.
  • Add anything else pertinent to your analysis.

Best of all, knitr can produce beautiful html files as output, which can be easily shared with collaborators. We encourage you to become familiar with knitr. We recommend Karl Broman’s knitr in a nutshell as an excellent introductory tutorial. You can also open JagsPrimerAnswers.Rmd to see how this html document is generated.


Exercise 1: Writing a DAG

Why does \(x\) fail to appear in the posterior distribution? Draw the Bayesian network for this model.

Answer: There is no \(x\) in the posterior because we assume it is measured without error; it enters the model as known data rather than as a random variable.

Fig 1. Bayesian network for logistic model.


Exercise 2: Can you improve these priors?

A recurring theme in this course will be to use priors that are informative whenever possible. The gamma priors in equation 3 include the entire positive number line. Don’t we know more about population biology than that? Let’s say for now that we are modeling the population dynamics of a large mammal. How might you go about making the priors on population parameters more informative?

Answer: A great source of priors in biology is allometric scaling relationships, which predict all kinds of biological quantities based on the mass of organisms (Peters, 1983; Pennycuick, 1992). If you know the approximate mass of the animal, you can compute broad but nonetheless informative priors on \(r\) and \(K\). This might leave out the social scientists, but I would trust the scaling of \(r\) for people if not \(K\).

In the absence of some sort of scholarly way to find priors, we can at least constrain them somewhat. There is no way that a large mammal can have an intrinsic rate of increase exceeding 1 – many values for \(r\) within gamma(.001, .001) are far larger than that and hence are complete nonsense. We know \(r\) must be positive and we can put a plausible bound on its upper limit. The only requirement for a vague prior is that its “\(\ldots\) range of uncertainty should be clearly wider than the range of reasonable values of the parameter\(\ldots\)” (Gelman and Hill, 2009, page 355), so we could use \(r\) ~ uniform(0, 2) and be sure that it would be minimally informative. Similarly, we could use experience and knowledge to put some reasonable bounds on \(K\) and even \(\sigma\), which we can use to calculate \(\tau\) as \(\tau=\frac{1}{\sigma^{2}}\).
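To see the problem concretely, here is a quick sketch (not part of the original key) comparing draws from the supposedly vague gamma prior with the proposed uniform prior for \(r\):

```r
set.seed(123)

# Draws from the "vague" gamma(.001, .001) prior
r.gamma <- rgamma(10000, shape = 0.001, rate = 0.001)
summary(r.gamma)
# Nearly all draws are vanishingly close to zero, while the occasional
# draw can be enormous (the prior mean is 1 but the variance is 1000);
# neither behavior is plausible for r in a large mammal.

# Draws from the weakly informative uniform(0, 2) prior
summary(runif(10000, min = 0, max = 2))
# All prior mass lies within a biologically defensible range.
```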

R. H. Peters. The ecological implications of body size. Cambridge University Press, Cambridge, United Kingdom, 1983.

C. J. Pennycuick. Newton rules biology. Oxford University Press, Oxford, United Kingdom, 1992.

A. Gelman and J. Hill. Data analysis using regression and multilevel/hierarchical models. Cambridge University Press, Cambridge, United Kingdom, 2009.


Exercise 3: Using for loops

Write a code fragment to set vague normal priors for 5 regression coefficients – dnorm(0, 1.0E-6) – stored in the vector b.

for(i in 1:5){
  b[i] ~ dnorm(0, .000001)
}


Exercise 4: Coding the logistic regression

Write R code (algorithm 3) to run the JAGS model (algorithm 2) and estimate the parameters, \(r\), \(K\), \(\sigma\), and \(\tau\). We suggest you insert the JAGS model into this R script using the sink command as shown in algorithm 4. You will find this a very convenient way to keep all your code in the same R script.

Here is the joint distribution for our logistic model again, with the priors updated from exercise 2 and \(\tau\) expressed as a derived quantity,

\[\begin{eqnarray} \mu_{i} & = & r-\frac{rx_{i}}{K}\textrm{,}\nonumber\\[1em] \tau & = & \frac{1}{\sigma^{2}}\textrm{,}\nonumber\\[1em] \left[r,K,\sigma\mid\mathbf{y}\right] & \propto & \prod_{i=1}^{n}\textrm{normal}\left(y_{i} \mid \mu_{i},\tau\right)\textrm{uniform}\left(K\mid 0,4000\right) \textrm{uniform}\left(\sigma\mid 0, 2\right) \textrm{uniform}\left(r\mid 0,2\right)\textrm{.}\nonumber\\ \end{eqnarray}\]

We use the sink command to create a JAGS script from our joint distribution. This file is created within R and saved in the working directory. Please note that the outer set of brackets is only required when running this code within an R Markdown document (as we did to make this answer key). If you are running the code in a plain R script, the brackets are not needed.

It might be useful for you to know that the data were generated from a model with \(r=.2\), \(K=1200\), and \(\sigma = .03\).

{ # Extra bracket needed only for R markdown files
sink("LogisticJAGS.R")
cat(" 
model{
  # priors
  K ~ dunif(0, 4000)
  r ~ dunif(0, 2)
  sigma ~ dunif(0, 2) 
  tau <- 1/sigma^2
  
  # likelihood
  for(i in 1:n){
    mu[i] <- r - r/K * x[i]
    y[i] ~ dnorm(mu[i], tau)
  }
} 
",fill = TRUE)
sink()
} # Extra bracket needed only for R markdown files

Then we run the remaining commands discussed in the JAGS Primer. Note that the jags.model call creating jm reads the JAGS script LogisticJAGS.R we just created.

rm(list = ls())

library(ESS575)
library(rjags)
## Loading required package: coda
## Linked to JAGS 4.2.0
## Loaded modules: basemod,bugs
set.seed(1)

Logistic <- Logistic[order(Logistic$PopulationSize),]

inits = list(
  list(K = 1500, r = .2, sigma = 1),
  list(K = 1000, r = .15, sigma = .1),
  list(K = 900, r = .3, sigma = .01))

data = list(
  n = nrow(Logistic),
  x = as.double(Logistic$PopulationSize),
  y = as.double(Logistic$GrowthRate))

n.adapt = 5000
n.update = 10000
n.iter = 10000

jm = jags.model("LogisticJAGS.R", data = data, inits = inits, n.chains = length(inits), n.adapt = n.adapt)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 50
##    Unobserved stochastic nodes: 3
##    Total graph size: 217
## 
## Initializing model
update(jm, n.iter = n.update)
zm = coda.samples(jm, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
summary(zm)
## 
## Iterations = 15001:25000
## Thinning interval = 1 
## Number of chains = 3 
## Sample size per chain = 10000 
## 
## 1. Empirical mean and standard deviation for each variable,
##    plus standard error of the mean:
## 
##            Mean        SD  Naive SE Time-series SE
## K     1.238e+03 6.288e+01 3.630e-01      7.539e-01
## r     2.006e-01 9.773e-03 5.643e-05      1.143e-04
## sigma 2.867e-02 3.033e-03 1.751e-05      2.522e-05
## tau   1.257e+03 2.594e+02 1.498e+00      2.042e+00
## 
## 2. Quantiles for each variable:
## 
##            2.5%       25%       50%       75%     97.5%
## K     1.131e+03 1.195e+03 1233.0596 1.275e+03 1375.9685
## r     1.814e-01 1.941e-01    0.2006 2.071e-01    0.2199
## sigma 2.345e-02 2.654e-02    0.0284 3.052e-02    0.0354
## tau   7.980e+02 1.073e+03 1239.4604 1.420e+03 1819.0041


Exercise 5: Coding the logistic regression to run in parallel (optional)

Append R code (algorithm 5) to the script you made in exercise 4 to run the JAGS model (algorithm 2) in parallel and estimate the parameters, \(r\), \(K\), \(\sigma\), and \(\tau\). Use the proc.time function in R to compare the time required for the sequential and parallel JAGS runs. If your computer has 3 cores, try running only 2 chains in parallel when doing this exercise. If you have fewer than 3 cores, work with a classmate who has at least 3 cores.

We create a function called initFunc to randomly draw values from a portion of each parameter’s support. We will use this function to provide initial values for each chain. We test out initFunc by running it a couple of times.

initFunc <- function(){
  return(list(
    K = runif(1, 10, 4000),
    r = runif(1, .01, 2),
    sigma = runif(1, 0, 2)))
}

initFunc()
## $K
## [1] 1069.38
## 
## $r
## [1] 0.7505266
## 
## $sigma
## [1] 1.145707
initFunc()
## $K
## [1] 3633.749
## 
## $r
## [1] 0.411347
## 
## $sigma
## [1] 1.796779

Now we run the model in parallel using the code from algorithm 5. We use the proc.time function to see how long JAGS takes to run the logistic model in parallel.

# run JAGS model in parallel
library(parallel)
detectCores()
## [1] 8
cl <- makeCluster(3) # Here we use three cores
clusterExport(cl, c("data", "initFunc", "n.adapt", "n.update", "n.iter")) 

ptm <- proc.time()
out <- clusterEvalQ(cl, {
  library(rjags)
  set.seed(1)
  jm = jags.model("LogisticJAGS.R", data = data, inits = initFunc(), 
  n.chains = 1, n.adapt = n.adapt)
  update(jm, n.iter = n.update)
  zmCore = coda.samples(jm, variable.names = c("K", "r", "sigma", "tau"), 
  n.iter = n.iter, thin = 1)
  return(as.mcmc(zmCore))
})
ParallelTime <- proc.time() - ptm
ParallelTime
##    user  system elapsed 
##   0.003   0.000   1.242
stopCluster(cl)
zmP <- mcmc.list(out)

We rerun the model sequentially and use proc.time again for comparison.

ptm <- proc.time()
jm = jags.model("LogisticJAGS.R", data = data, inits = inits, n.chains = length(inits), n.adapt = n.adapt)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 50
##    Unobserved stochastic nodes: 3
##    Total graph size: 217
## 
## Initializing model
update(jm, n.iter = n.update)
zm = coda.samples(jm, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
SequentialTime <- proc.time() - ptm
SequentialTime
##    user  system elapsed 
##   4.144   0.008   4.154

It looks like the parallel model runs about 3.3 times faster here. As you run more iterations, this factor should settle toward the number of chains run in parallel (here, 3), because sampling time comes to dominate the fixed overhead of compiling and initializing each model (why?).


Exercise 6: Summarizing coda objects

Build a table that contains the mean, standard deviation, median, and upper and lower 2.5% CI for the parameters from the logistic model. Output your table with 3 significant digits to a .csv file readable by Excel (Hint: see the signif() function).

a <- signif(as.data.frame(summary(zm)$stat[, 1:2]), digits = 3)
b <- signif(as.data.frame(summary(zm)$quantile[, c(1, 3, 5)]), digits = 3)
LogisticParameters <- cbind(rownames(a), a, b)
rownames(LogisticParameters) <- c()
names(LogisticParameters) <- c("parameter", "mean", "standardDeviation", "lower95", "median", "upper95")
LogisticParameters
##   parameter     mean standardDeviation  lower95   median  upper95
## 1         K 1.24e+03          6.31e+01 1.13e+03 1.23e+03 1.38e+03
## 2         r 2.01e-01          9.76e-03 1.82e-01 2.01e-01 2.20e-01
## 3     sigma 2.87e-02          3.04e-03 2.35e-02 2.84e-02 3.54e-02
## 4       tau 1.25e+03          2.59e+02 7.99e+02 1.24e+03 1.81e+03
write.csv(LogisticParameters, file = "LogisticParameters.csv")


Exercise 7: Understanding coda objects

Modify your code to produce a coda object with 3 chains called zm.short, setting n.adapt = 500, n.update = 500, and n.iter = 20.

  1. Output the estimate of \(\sigma\) for the third iteration from the second chain.
  2. Output all of the estimates of \(r\) from the first chain.
  3. Verify your answers by printing the entire chain, i.e. enter zm.short at the console.
n.adapt = 500
n.update = 500
n.iter = 20

jm.short = jags.model("LogisticJAGS.R", data = data, inits = inits, n.chains = length(inits), n.adapt = n.adapt)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 50
##    Unobserved stochastic nodes: 3
##    Total graph size: 217
## 
## Initializing model
update(jm.short, n.iter = n.update)
zm.short = coda.samples(jm.short, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
zm.short[[2]][3,3] # third iteration from second chain for sigma 
##      sigma 
## 0.03169468
zm.short[[1]][,2] # all estimates of r from first chain
## Markov Chain Monte Carlo (MCMC) output:
## Start = 1001 
## End = 1020 
## Thinning interval = 1 
##  [1] 0.2030203 0.2108610 0.2202984 0.2189606 0.2078010 0.2008283 0.1961416
##  [8] 0.1959804 0.2015194 0.2188144 0.2048849 0.1867363 0.1925875 0.1955791
## [15] 0.1957873 0.2034632 0.1941668 0.2004273 0.1993938 0.1917201
zm.short
## [[1]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 1001 
## End = 1020 
## Thinning interval = 1 
##              K         r      sigma       tau
##  [1,] 1181.653 0.2030203 0.02404521 1729.5884
##  [2,] 1235.919 0.2108610 0.03156689 1003.5439
##  [3,] 1209.245 0.2202984 0.02936133 1159.9746
##  [4,] 1173.462 0.2189606 0.02973226 1131.2126
##  [5,] 1186.633 0.2078010 0.03252151  945.4937
##  [6,] 1191.945 0.2008283 0.03339594  896.6286
##  [7,] 1203.969 0.1961416 0.03030680 1088.7291
##  [8,] 1191.896 0.1959804 0.02979303 1126.6025
##  [9,] 1201.578 0.2015194 0.02506923 1591.1756
## [10,] 1197.818 0.2188144 0.02979680 1126.3176
## [11,] 1202.582 0.2048849 0.02979170 1126.7028
## [12,] 1290.592 0.1867363 0.02691262 1380.6636
## [13,] 1295.286 0.1925875 0.02779452 1294.4389
## [14,] 1285.023 0.1955791 0.02784065 1290.1529
## [15,] 1263.384 0.1957873 0.02681210 1391.0356
## [16,] 1304.803 0.2034632 0.02567445 1517.0426
## [17,] 1223.385 0.1941668 0.03049932 1075.0277
## [18,] 1238.984 0.2004273 0.03178119  990.0555
## [19,] 1210.264 0.1993938 0.02703035 1368.6629
## [20,] 1313.790 0.1917201 0.02513456 1582.9140
## 
## [[2]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 1001 
## End = 1020 
## Thinning interval = 1 
##              K         r      sigma       tau
##  [1,] 1259.236 0.1987620 0.02934469 1161.2906
##  [2,] 1296.763 0.1782903 0.02826893 1251.3568
##  [3,] 1361.928 0.1808955 0.03169468  995.4679
##  [4,] 1341.617 0.1928334 0.02663881 1409.1928
##  [5,] 1329.236 0.1800192 0.02653763 1419.9583
##  [6,] 1307.419 0.1854929 0.02474384 1633.3000
##  [7,] 1329.466 0.1883291 0.02707302 1364.3525
##  [8,] 1260.794 0.2078104 0.02699141 1372.6155
##  [9,] 1215.190 0.1978350 0.02504065 1594.8100
## [10,] 1252.989 0.2026278 0.02612994 1464.6136
## [11,] 1190.874 0.2154729 0.02303833 1884.0747
## [12,] 1164.754 0.1959241 0.02600548 1478.6662
## [13,] 1272.806 0.1976314 0.02665192 1407.8067
## [14,] 1197.249 0.2104054 0.02854932 1226.8977
## [15,] 1186.206 0.2095827 0.02748301 1323.9499
## [16,] 1197.686 0.2062743 0.03478085  826.6463
## [17,] 1232.439 0.2067848 0.03155640 1004.2112
## [18,] 1233.755 0.2092772 0.02854276 1227.4624
## [19,] 1260.815 0.2016824 0.02396908 1740.5932
## [20,] 1256.279 0.2028321 0.03251925  945.6253
## 
## [[3]]
## Markov Chain Monte Carlo (MCMC) output:
## Start = 1001 
## End = 1020 
## Thinning interval = 1 
##              K         r      sigma       tau
##  [1,] 1226.462 0.2014688 0.02563643 1521.5452
##  [2,] 1223.745 0.2005959 0.02878547 1206.8500
##  [3,] 1239.955 0.1964147 0.03268001  936.3442
##  [4,] 1287.268 0.1849378 0.03336379  898.3573
##  [5,] 1285.660 0.1837527 0.03266691  937.0957
##  [6,] 1375.630 0.1927980 0.03318887  907.8522
##  [7,] 1383.053 0.1927733 0.03195796  979.1333
##  [8,] 1356.639 0.1934964 0.02987360 1120.5332
##  [9,] 1319.808 0.1904076 0.03017143 1098.5208
## [10,] 1281.788 0.1885839 0.02713142 1358.4854
## [11,] 1398.285 0.1864175 0.02899040 1189.8485
## [12,] 1320.734 0.1965401 0.02913553 1178.0244
## [13,] 1231.152 0.1949804 0.03286307  925.9420
## [14,] 1256.216 0.1892845 0.03561903  788.1987
## [15,] 1332.042 0.1909689 0.02846402 1234.2621
## [16,] 1322.767 0.1905138 0.02875841 1209.1224
## [17,] 1279.100 0.1871464 0.02947830 1150.7872
## [18,] 1370.564 0.1964373 0.03139866 1014.3267
## [19,] 1245.975 0.1987005 0.03006883 1106.0303
## [20,] 1165.587 0.1989103 0.02571081 1512.7543
## 
## attr(,"class")
## [1] "mcmc.list"


Exercise 8: Convert the zm object to a data frame

Using the elements of the data frame (not zm) as input to functions:

  1. Find the maximum value of \(\sigma\).
  2. Estimate the mean of \(r\) for the first 1000 and last 1000 iterations in the chain.
  3. Produce a publication quality plot of the posterior density of \(K\).
  4. Estimate the probability that the parameter \(K\) exceeds 1600 and the probability that \(K\) falls between 1000 and 1300. (Hint: look into using the ecdf function.)
df = as.data.frame(rbind(zm[[1]], zm[[2]], zm[[3]]))
# Find the maximum value of sigma
max(df$sigma)
## [1] 0.04593981
# Find the mean of r for the first 1000 iterations
mean(df$r[1: 1000])
## [1] 0.20099
# Find the mean of r for the last 1000 iterations
nr = length(df$r)
mean(df$r[(nr - 1000): nr]) 
## [1] 0.2011763
plot(density(df$K), main = "", xlim=c(800, 2000), xlab = "K") 
Fig. 2. Posterior density of K.

# Find the probability that the parameter K exceeds 1600
1 - ecdf(df$K)(1600)
## [1] 0.0001333333
# Find the probability that the parameter 1000 < K < 1300 
ecdf(df$K)(1300) - ecdf(df$K)(1000)
## [1] 0.8486667


Exercise 9: Vectors in coda objects

Modify your code to include estimates of \(\mu\) and summarize the coda object zm. What if you wanted to plot the model predictions with 95% credible intervals against the data? How would you do that?

Answer: There are several ways this can be done, but the general idea is that you need to extract the rows of the coda object summary that contain the quantiles for \(\mu\), which can be tedious and error-prone. For example, if you select rows of the summary table by position and later add or remove monitored parameters, your row counts will be off. There are ways to use rownames, but a far better way to plot vectors is described in the section on JAGS objects.
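For completeness, here is one rowname-based sketch (not part of the original key; it assumes zm monitors mu, as in the code below) that avoids hard-coded row positions:

```r
# Select rows of the summary table by name instead of position.
# grep() finds every row whose name starts with "mu[", so the
# selection stays correct if other monitored parameters are
# added or removed.
q <- summary(zm)$quantile
mu.rows <- grep("^mu\\[", rownames(q))
mu <- as.data.frame(q[mu.rows, c(1, 3, 5)])
names(mu) <- c("lower95", "median", "upper95")
```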

n.adapt = 5000
n.update = 10000
n.iter = 10000
jm = jags.model("LogisticJAGS.R", data = data, inits = inits, n.chains = length(inits), n.adapt = n.adapt)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 50
##    Unobserved stochastic nodes: 3
##    Total graph size: 217
## 
## Initializing model
update(jm, n.iter = n.update)
zm = coda.samples(jm, variable.names = c("K", "r", "sigma", "mu"), n.iter = n.iter, thin = 1)
zj = jags.samples(jm, variable.names = c("K", "r", "sigma", "mu"), n.iter = n.iter, thin = 1)
mu <- as.data.frame(summary(zm)$quantile[2:51, c(1, 3, 5)]) # <- this is an easy place to make a mistake
names(mu) <- c("lower95", "median", "upper95")
Logistic2 <- cbind(mu, Logistic)
plot(Logistic2$PopulationSize, Logistic2$GrowthRate, xlab = "N", ylab = "Per capita growth rate")
lines(Logistic2$PopulationSize, Logistic2$median)
lines(Logistic2$PopulationSize, Logistic2$lower95, lty = "dashed")
lines(Logistic2$PopulationSize, Logistic2$upper95, lty = "dashed")
Fig. 3. Median and 95% credible intervals for predicted growth rate.


Exercise 10: Making plots with JAGS objects

For the logistic model:

  1. Plot the observations of growth rate as a function of observed population size.
  2. Overlay the median of the model predictions as a solid line.
  3. Overlay the 95% credible intervals as dashed lines.
  4. Prepare a separate plot of the posterior density of \(K\).

For convenience, we created the JAGS object zj in exercise 9. Here we use it to make plots.

mu <- summary(zj$mu, quantile, c(.025, .5, .975))$stat # <- notice this is harder to mess up!
par(mfrow = c(1, 2))
plot(Logistic$PopulationSize, Logistic$GrowthRate, xlab = "N", ylab = "Per capita growth rate")
lines(Logistic$PopulationSize, mu[2,])
lines(Logistic$PopulationSize, mu[1,], lty = "dashed")
lines(Logistic$PopulationSize, mu[3,], lty = "dashed")
plot(density(zj$K), main = "", xlim=c(800, 2000), xlab = "K") 
Fig. 4. Median and 95% credible intervals for predicted growth rate and posterior density of K.


Exercise 11: Summarizing the JAGS object

  1. Calculate the median of the second chain for \(K\).
  2. Calculate the upper and lower 95% quantiles for the 16th estimate of \(\mu\) without using the summary function.
  3. Calculate the probability that the 16th estimate of \(\mu < 0\).
summary(zj$K, median)$stat
## [1] 1233.173
quantile(zj$mu[16,,], c(.025, .975))
##      2.5%     97.5% 
## 0.1179175 0.1362593
ecdf(zj$mu[16,,])(0)
## [1] 0


Exercise 12: Assessing convergence

Rerun the logistic model with n.adapt = 100. Then do the following:

  1. Keep the next 100 iterations. Assess convergence visually with traceplot and with the Gelman-Rubin, Heidelberger and Welch, and Raftery diagnostics.
  2. Update another 500 iterations and then keep 500 more iterations. Repeat your assessment of convergence.
  3. Repeat steps 1 and 2 until you feel you have reached convergence.
  4. Change the adapt phase to zero and repeat steps 1 – 3. What happens?
set.seed(1)
n.adapt = 100
jm.short = jags.model("LogisticJAGS.R", data = data, inits = inits, n.chains = length(inits), n.adapt = n.adapt)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 50
##    Unobserved stochastic nodes: 3
##    Total graph size: 217
## 
## Initializing model
n.iter = 100
zm.short = coda.samples(jm.short, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
plot(zm.short)

gelman.diag(zm.short)
## Potential scale reduction factors:
## 
##       Point est. Upper C.I.
## K           1.06       1.21
## r           1.10       1.30
## sigma       1.02       1.05
## tau         1.01       1.03
## 
## Multivariate psrf
## 
## 1.08
heidel.diag(zm.short)
## [[1]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed        1        0.1290 
## r     failed       NA        0.0401 
## sigma passed        1        0.9767 
## tau   passed        1        0.9490 
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.26e+03 3.03e+01 
## r     <NA>            NA       NA 
## sigma passed    2.82e-02 7.42e-04 
## tau   passed    1.30e+03 5.73e+01 
## 
## [[2]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.439  
## r     passed       1         0.921  
## sigma passed       1         0.451  
## tau   passed       1         0.513  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.24e+03 5.02e+01 
## r     passed    2.01e-01 5.17e-03 
## sigma passed    2.89e-02 7.96e-04 
## tau   passed    1.24e+03 7.05e+01 
## 
## [[3]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     failed       NA        0.00132
## r     failed       NA        0.03980
## sigma passed       11        0.06082
## tau   passed        1        0.05201
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     <NA>            NA       NA 
## r     <NA>            NA       NA 
## sigma passed    2.81e-02 6.27e-04 
## tau   passed    1.28e+03 6.54e+01
raftery.diag(zm.short)
## [[1]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
## 
## [[2]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
## 
## [[3]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
n.update = 500
update(jm.short, n.iter = n.update)
n.iter = 500
zm.short = coda.samples(jm.short, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
plot(zm.short)

gelman.diag(zm.short)
## Potential scale reduction factors:
## 
##       Point est. Upper C.I.
## K           1.02       1.03
## r           1.01       1.01
## sigma       1.00       1.02
## tau         1.00       1.01
## 
## Multivariate psrf
## 
## 1.01
heidel.diag(zm.short)
## [[1]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.703  
## r     passed       1         0.541  
## sigma passed       1         0.354  
## tau   passed       1         0.343  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1244.883 1.78e+01 
## r     passed       0.200 2.14e-03 
## sigma passed       0.029 4.74e-04 
## tau   passed    1233.463 3.96e+01 
## 
## [[2]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       51        0.1310 
## r     passed       51        0.0892 
## sigma passed        1        0.7220 
## tau   passed        1        0.8296 
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.23e+03 2.09e+01 
## r     passed    2.01e-01 2.70e-03 
## sigma passed    2.86e-02 3.86e-04 
## tau   passed    1.26e+03 3.25e+01 
## 
## [[3]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.243  
## r     passed       1         0.599  
## sigma passed       1         0.808  
## tau   passed       1         0.874  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.24e+03 1.08e+01 
## r     passed    2.01e-01 1.61e-03 
## sigma passed    2.86e-02 3.24e-04 
## tau   passed    1.26e+03 2.75e+01
raftery.diag(zm.short)
## [[1]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
## 
## [[2]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
## 
## [[3]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
n.update = 500
update(jm.short, n.iter = n.update)
n.iter = 500
zm.short = coda.samples(jm.short, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
plot(zm.short)

gelman.diag(zm.short)
## Potential scale reduction factors:
## 
##       Point est. Upper C.I.
## K           1.00       1.01
## r           1.01       1.03
## sigma       1.00       1.00
## tau         1.00       1.00
## 
## Multivariate psrf
## 
## 1.01
heidel.diag(zm.short)
## [[1]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.529  
## r     passed       1         0.518  
## sigma passed       1         0.659  
## tau   passed       1         0.326  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.23e+03 10.46395 
## r     passed    2.02e-01  0.00143 
## sigma passed    2.86e-02  0.00034 
## tau   passed    1.27e+03 27.42649 
## 
## [[2]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.998  
## r     passed       1         0.950  
## sigma passed       1         0.559  
## tau   passed       1         0.553  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.23e+03 1.59e+01 
## r     passed    2.01e-01 1.56e-03 
## sigma passed    2.85e-02 3.24e-04 
## tau   passed    1.27e+03 2.74e+01 
## 
## [[3]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed        1        0.294  
## r     passed        1        0.257  
## sigma passed       51        0.140  
## tau   passed        1        0.139  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.23e+03 1.15e+01 
## r     passed    2.00e-01 1.71e-03 
## sigma passed    2.84e-02 3.35e-04 
## tau   passed    1.27e+03 2.68e+01
raftery.diag(zm.short)
## [[1]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
## 
## [[2]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
## 
## [[3]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
## 
## You need a sample size of at least 3746 with these values of q, r and s
# Final run: you know this has converged!
n.update = 5000
update(jm.short, n.iter = n.update)
n.iter = 10000
zm.short = coda.samples(jm.short, variable.names = c("K", "r", "sigma", "tau"), n.iter = n.iter, thin = 1)
plot(zm.short)

gelman.diag(zm.short)
## Potential scale reduction factors:
## 
##       Point est. Upper C.I.
## K              1          1
## r              1          1
## sigma          1          1
## tau            1          1
## 
## Multivariate psrf
## 
## 1
heidel.diag(zm.short)
## [[1]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.264  
## r     passed       1         0.153  
## sigma passed       1         0.124  
## tau   passed       1         0.113  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.24e+03 3.25e+00 
## r     passed    2.01e-01 4.35e-04 
## sigma passed    2.86e-02 7.76e-05 
## tau   passed    1.26e+03 6.68e+00 
## 
## [[2]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.641  
## r     passed       1         0.527  
## sigma passed       1         0.294  
## tau   passed       1         0.186  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.24e+03 4.81e+00 
## r     passed    2.01e-01 5.78e-04 
## sigma passed    2.87e-02 8.05e-05 
## tau   passed    1.25e+03 6.59e+00 
## 
## [[3]]
##                                     
##       Stationarity start     p-value
##       test         iteration        
## K     passed       1         0.726  
## r     passed       1         0.833  
## sigma passed       1         0.643  
## tau   passed       1         0.626  
##                                   
##       Halfwidth Mean     Halfwidth
##       test                        
## K     passed    1.24e+03 2.88e+00 
## r     passed    2.01e-01 3.72e-04 
## sigma passed    2.87e-02 7.98e-05 
## tau   passed    1.26e+03 6.35e+00
raftery.diag(zm.short)
## [[1]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
##                                              
##        Burn-in  Total Lower bound  Dependence
##        (M)      (N)   (Nmin)       factor (I)
##  K     10       11820 3746         3.16      
##  r     10       10984 3746         2.93      
##  sigma 4        4752  3746         1.27      
##  tau   5        5973  3746         1.59      
## 
## 
## [[2]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
##                                              
##        Burn-in  Total Lower bound  Dependence
##        (M)      (N)   (Nmin)       factor (I)
##  K     10       10910 3746         2.91      
##  r     12       15558 3746         4.15      
##  sigma 4        5123  3746         1.37      
##  tau   6        6816  3746         1.82      
## 
## 
## [[3]]
## 
## Quantile (q) = 0.025
## Accuracy (r) = +/- 0.005
## Probability (s) = 0.95 
##                                              
##        Burn-in  Total Lower bound  Dependence
##        (M)      (N)   (Nmin)       factor (I)
##  K     8        8518  3746         2.27      
##  r     8        9508  3746         2.54      
##  sigma 4        5210  3746         1.39      
##  tau   6        7002  3746         1.87